AI journalism
CNET's new guidelines for AI journalism met with union pushback
Nearly seven months after it began publishing machine-generated stories without disclosing their true authorship (or lack thereof) to readers, CNET has finally, publicly changed its policy on the use of AI in its journalism. In short, stories written by its in-house artificial intelligence -- which it calls Responsible AI Machine Partner (RAMP) -- are no more, but the specter of AI in its newsroom is far from exorcised. The site indicates that there are still two broad categories of work where RAMP will be deployed. The first, which it calls "Organizing large amounts of information," comes with an example that sounds more authorial than that umbrella descriptor lets on: "RAMP will help us sort things like pricing and availability data and present it in ways that tailor information to certain audiences. Without an AI assist, this volume of work wouldn't be possible."
AI journalism is getting harder to tell from the old-fashioned, human-generated kind
By Ian Tucker
A couple of weeks ago I tweeted a call-out for freelance journalists to pitch me feature ideas for the science and technology section of the Observer's New Review. Unsurprisingly, given the headlines, fears and interest surrounding LLM (large language model) chatbots such as ChatGPT, many of the suggestions that flooded in focused on artificial intelligence – including a pitch about how it is being employed to predict deforestation in the Amazon. One submission, however, from an engineering student who had posted a couple of articles on Medium, seemed to be riding the artificial intelligence wave with more chutzpah. He offered three feature ideas – pitches on innovative agriculture, data storage and the therapeutic potential of VR. While coherent, the pitches had a bland authority about them, a repetitive paragraph structure and upbeat endings – all of which, if you've been toying with ChatGPT or reading about Google chatbot Bard's latest mishaps, are hints of chatbot-generated content.
Eighteen pitfalls to beware of in AI journalism
Reporting about AI is hard. When news articles uncritically repeat PR statements, overuse images of robots, attribute agency to AI tools, or downplay their limitations, they mislead and misinform readers about the potential and limitations of AI. We noticed that many articles tend to mislead in similar ways, so we analyzed over 50 articles about AI from major publications, from which we compiled 18 recurring pitfalls. We hope that being familiar with these will help you detect hype whenever you see it. We also hope this compilation of pitfalls will help journalists avoid them.
The Impact of Creative AI – FE News
The UK government has highlighted artificial intelligence as one of the four 'Grand Challenges' that will transform our future. What this transformation will look like is very much unknown: we are standing on the edge of a technological revolution no one can truly comprehend. Stories generally give us a tainted picture of AI: it is created to serve humans, becomes aware that we are irrelevant, and tries to destroy us. At SXSW 2018, Tesla's Elon Musk said the current state of AI regulation is "insane," calling the technology "more dangerous than nukes." But why are we so scared of AI, and how could it impact our jobs, or even our humanity?
AI journalism: What is it and should journalists see it as a threat?
For many of us the term "artificial intelligence" still belongs in the realm of science fiction, bringing to mind the domineering Skynet in the Terminator films or the malevolent HAL in 2001: A Space Odyssey. A recent Press Gazette poll asking readers whether they think AI robots are a threat to journalism or an opportunity found that the majority (69%) of more than 1,200 voters saw AI as a threat. But while what's known as "artificial general intelligence" – machines akin or superior to human intelligence – does not yet exist and may never be fully realised, AI tools are already in use in the news industry today. These tools help in the gathering, production and distribution of information. They fall broadly under the definition of "machine learning", a subset of AI in which computers handle specific tasks and are able to learn and improve as they go, independent of human help.
The future of AI journalism is less hyperbole and smarter readers
Today we launched Neural, our new home for human-centric AI news and analysis. While we're celebrating the culmination of years of hard work from our behind-the-scenes staff, I'm taking the day to quietly contemplate the future of AI journalism. The Guardian's Oscar Schwartz wrote an article in 2018 titled "The discourse is unhinged: how the media gets AI alarmingly wrong." In it, he discusses the 2017 hype explosion surrounding Facebook's AI research lab developing a pair of chatbots that created a shorthand language for negotiating. In reality, the chatbots' behavior was remarkable but not entirely unexpected.
AI Journalism: A Second Chance for News Media - Robot Writers AI
AI-generated writing and similar tools are offering journalists a second chance to reconnect with the public and up their game, according to Charlie Beckett, director of the Media Policy Project, sponsored by the London School of Economics and Political Science. "AI in its broadest sense provides all sorts of opportunities for journalism – and journalism needs all the help it can get right now," Beckett says. A few years ago, a couple of companies boasted that they would be able to replace journalists within a few years, Van der Lee says – a boast that turned out to be untrue. Instead, AI systems like Van der Lee's – which can generate short sports stories detailing the results of thousands of local soccer matches on a regular basis – are all about doing rote work. That frees up journalists to write more complex, more insightful news stories and features, according to Van der Lee. "Robots will never write as well as people," he says. Elsewhere, Grammarly's AI editor, which works in popular web browsers, has a new feature that can offer suggestions to give writing a tone that is neutral, confident, joyful, optimistic, friendly, urgent, analytical or respectful. Currently, Grammarly's tone analysis is available only for Google Chrome users; the toolmaker plans to roll out the feature to Firefox, Safari and other popular browsers in the coming months. Also featured is a step-by-step guide on getting started with AI-generated writing for public relations, marketing and similar content-generation work. "Today, instead of three to five hours, reports take us 10 minutes to write," Moehring says. "The reports are delivered on the first business day of the month."
r/MachineLearning - [D] Irresponsible anthropomorphism is killing AI journalism
The current state of media coverage of AI is fixated on constructing a compelling narrative for readers, and often personifies models well beyond their capabilities – to the point that articles almost always end up reading as if every classifier were some form of limited AGI. Take "Meet Norman the Psychopathic AI," an article by the BBC, an outlet I generally consider quite capable. While the research methodology and some of the implications are discussed in the article, the majority of laypeople who encounter it will likely conclude, erroneously, that Norman possesses beliefs, a worldview, and some dark outlook on humanity. Some readers will think "Norman" is violent or dangerous, with a mind of his own.
China's news agency is reinventing itself with AI
On the heels of billions of yuan of investment poured into China's artificial intelligence scene, China's state news agency has announced that it is rebuilding its newsroom to emphasize human-machine collaboration. There are already elements of this in quite a few newsrooms, but this is the first announcement (I've seen) of a large news organization rearranging itself around AI… https://t.co/oK7pZbj158 Xinhua News Agency president Cai Mingzhao said Xinhua will build a "new kind of newsroom based on information technology and featuring human-machine collaboration." The agency has also introduced the "Media Brain" platform to integrate cloud computing, the Internet of Things, AI and more into news production, with potential applications "from finding leads, to news gathering, editing, distribution and finally feedback analysis." The agency's announcement was sparse on details, but it's the latest component of a deep push into AI by China.